
    Optimization Under Uncertainty Using the Generalized Inverse Distribution Function

    A framework for robust optimization under uncertainty based on the generalized inverse distribution function (GIDF), also called the quantile function, is proposed here. Compared with more classical approaches that rely on statistical moments as the deterministic attributes defining the objectives of the optimization process, the inverse cumulative distribution function allows all the information available in the probabilistic domain to be exploited. Furthermore, a quantile-based approach leads naturally to a multi-objective methodology, which allows an a posteriori selection of the candidate design based on risk/opportunity criteria defined by the designer. Finally, the error in the estimation of the objectives due to the resolution of the GIDF is shown to be quantifiable.
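    As a minimal illustration of the quantile-based objectives described above (a sketch, not the paper's implementation), the snippet below estimates an empirical GIDF from sampled performance values and reads off risk and opportunity quantiles; the model output and the quantile levels are stand-ins.

```python
import numpy as np

def empirical_gidf(samples, taus):
    """Empirical generalized inverse distribution function (quantile function).

    samples: 1-D array of a performance metric sampled under uncertainty.
    taus:    quantile levels in (0, 1); a low level plays the role of a
             risk objective, a high level that of an opportunity objective.
    """
    return np.quantile(np.asarray(samples), taus)

# Stand-in model output under sampled uncertain inputs.
rng = np.random.default_rng(0)
performance = rng.normal(loc=1.0, scale=0.2, size=2000)

q_risk, q_median, q_opportunity = empirical_gidf(performance, [0.05, 0.5, 0.95])
# A multi-objective optimizer can then trade q_risk off against q_opportunity,
# leaving the final risk/opportunity choice to the designer a posteriori.
```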

    Adaptive multi‐index collocation for uncertainty quantification and sensitivity analysis

    Peer reviewed.
    https://deepblue.lib.umich.edu/bitstream/2027.42/154316/1/nme6268.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/154316/2/NME_6268_novelty.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/154316/3/nme6268_am.pd

    Pricing and Hedging Asian Basket Options with Quasi-Monte Carlo Simulations

    In this article we consider the problem of pricing and hedging high-dimensional Asian basket options by Quasi-Monte Carlo simulation. We assume a Black-Scholes market with time-dependent volatilities and show how to compute the deltas with the aid of the Malliavin calculus, extending the procedure employed by Montero and Kohatsu-Higa (2003). Efficient path-generation algorithms, such as the Linear Transformation and Principal Component Analysis, exhibit a high computational cost in a market with time-dependent volatilities. We present a new, fast Cholesky algorithm for block matrices that makes the Linear Transformation even more convenient. Moreover, we propose a new path-generation technique based on a Kronecker Product Approximation. For correlated asset returns, this construction achieves the same accuracy as the Linear Transformation in the computation of deltas and prices while requiring less computational time. All these techniques can easily be employed for stochastic volatility models based on the mixture of multi-dimensional dynamics introduced by Brigo et al. (2004).
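    For context, here is a minimal sketch of the Principal Component Analysis path construction mentioned above, for a single asset with unit volatility and paired with scrambled Sobol' points; the paper's block-Cholesky and Kronecker Product Approximation constructions are not reproduced, and all sizes are illustrative.

```python
import numpy as np
from scipy.stats import norm, qmc

n_steps, n_paths = 32, 4096
times = np.linspace(1.0 / n_steps, 1.0, n_steps)

# Covariance of Brownian motion sampled at the grid times: C[i, j] = min(t_i, t_j).
cov = np.minimum.outer(times, times)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]                  # largest variance first
A = eigvecs[:, order] * np.sqrt(eigvals[order])    # C = A @ A.T

# Map low-discrepancy points to Gaussians, then to Brownian paths.
sobol = qmc.Sobol(d=n_steps, scramble=True, seed=0)
z = norm.ppf(sobol.random(n_paths))                # (n_paths, n_steps) normals
paths = z @ A.T                                    # rows: one Brownian path each
# Ordering components by variance concentrates the option payoff's variability
# in the first QMC coordinates, which is what makes PCA effective here.
```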

    A model assessing cost of operating marine systems using data obtained from Monte Carlo analysis

    This article presents a methodology for analysing the cost of operating marine systems under varying conditions. Data obtained from a previously developed Monte Carlo analysis are applied to assess the operational costs of various maintenance and inspection policies. The concept of total insured value is also applied to determine the cost attributed to risk. The aim is to show that Monte Carlo analysis can be adapted to provide information on the various factors affecting operational costs, to be used in decision-making that optimises the efficiency of marine systems. A method of modelling the effects of lead times due to unstocked items has also been included to broaden the scope of the analysis.
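    The kind of calculation involved can be illustrated with a small, entirely hypothetical Monte Carlo cost model (not the one developed in the article): annual failure counts, repair costs, and lead-time downtime for unstocked spares are sampled, and the resulting cost distribution can inform decisions such as stocking policy. All figures below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs = 10_000

failures = rng.poisson(lam=3.0, size=n_runs)              # failures per year
repair_cost = failures * rng.gamma(2.0, 5_000.0, n_runs)  # cost per failure event
unstocked = rng.random(n_runs) < 0.2                      # spare not on board
lead_days = rng.uniform(5, 30, n_runs) * unstocked * (failures > 0)
downtime_cost = lead_days * 2_000.0                       # cost per idle day

total = repair_cost + downtime_cost
print(f"mean annual cost: {total.mean():,.0f}, "
      f"95th percentile: {np.quantile(total, 0.95):,.0f}")
```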

    A comparison of approximation techniques for variance-based sensitivity analysis of biochemical reaction systems

    Background: Sensitivity analysis is an indispensable tool for the analysis of complex systems. In a recent paper, we introduced a thermodynamically consistent variance-based sensitivity analysis approach for studying the robustness and fragility properties of biochemical reaction systems under uncertainty in the standard chemical potentials of the activated complexes of the reactions and the standard chemical potentials of the molecular species. In that approach, key sensitivity indices were estimated by Monte Carlo sampling, which is computationally very demanding and impractical for large biochemical reaction systems. Computationally efficient algorithms are needed to make variance-based sensitivity analysis applicable to realistic cellular networks, modeled by biochemical reaction systems that consist of a large number of reactions and molecular species.
    Results: We present four techniques, derivative approximation (DA), polynomial approximation (PA), Gauss-Hermite integration (GHI), and orthonormal Hermite approximation (OHA), for analytically approximating the variance-based sensitivity indices associated with a biochemical reaction system. Using a well-known model of the mitogen-activated protein kinase signaling cascade as a case study, we numerically compare the approximation quality of these techniques against traditional Monte Carlo sampling. Our results indicate that, although DA is computationally the most attractive technique, special care should be exercised when using it for sensitivity analysis, since it may only be accurate at low levels of uncertainty. On the other hand, PA, GHI, and OHA are computationally more demanding than DA but can work well at high levels of uncertainty. GHI results in slightly better accuracy than PA, but it is more difficult to implement. OHA produces the most accurate approximation results and can be implemented in a straightforward manner. It turns out that the computational cost of the four approximation techniques considered in this paper is orders of magnitude smaller than that of traditional Monte Carlo estimation. Software, coded in MATLAB®, which implements all sensitivity analysis techniques discussed in this paper, is available free of charge.
    Conclusions: Estimating variance-based sensitivity indices of a large biochemical reaction system is a computationally challenging task that can only be addressed via approximations. Among the methods presented in this paper, a technique based on orthonormal Hermite polynomials seems to be an acceptable candidate for the job, producing very good approximation results for a wide range of uncertainty levels in a fraction of the time required by traditional Monte Carlo sampling.
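    As a point of reference for the Monte Carlo baseline the four approximations are compared against, the sketch below implements a standard pick-freeze estimator of first-order variance-based sensitivity indices on a stand-in test function; it is not the paper's thermodynamically consistent formulation or its MATLAB software.

```python
import numpy as np

def model(x):
    # Stand-in test function (Ishigami); not a biochemical reaction system.
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(2)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, size=(n, d))
B = rng.uniform(-np.pi, np.pi, size=(n, d))
fA, fB = model(A), model(B)
var = np.concatenate([fA, fB]).var()

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                          # freeze x_i, resample the rest
    S_i = np.mean(fB * (model(ABi) - fA)) / var  # Saltelli (2010) estimator
    print(f"S_{i + 1} ~= {S_i:.3f}")             # analytic: 0.314, 0.442, 0.0
```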

    Enhancing quantum efficiency of thin-film silicon solar cells by Pareto optimality

    We present a composite design methodology for the simulation and optimization of solar cell performance. Our method is based on the synergy of different computational techniques and is especially designed for thin-film cell technology. In particular, we aim to efficiently simulate light trapping and plasmonic effects to enhance the light harvesting of the cell. The methodology is based on the sequential application of a hierarchy of approaches: (a) full Maxwell simulations are applied to derive the photon scattering probability in systems presenting textured interfaces; (b) calibrated Photonic Monte Carlo is used in conjunction with the scattering matrices method to evaluate coherent and scattered photon absorption in the full cell architectures; (c) the results of these advanced optical simulations are used as the pair-generation terms in a model implemented in a Technology Computer Aided Design tool to derive the cell performance; (d) the models are investigated by qualitative and quantitative sensitivity analysis algorithms to evaluate the influence of the considered design parameters on the model output and to obtain a first-order description of the objective space; (e) the sensitivity analysis results are used to guide and simplify the optimization of the model, carried out through both Single-Objective Optimization (to fully maximize device efficiency) and Multi-Objective Optimization (to balance efficiency and cost); (f) the local, global, and “glocal” robustness of the optimal solutions found by the optimization algorithms is statistically evaluated; and (g) data-based Identifiability Analysis is used to study the relationships between parameters. The results show a noteworthy improvement with respect to the quantum efficiency of the reference cell, demonstrating that the presented methodology is suitable for the effective optimization of solar cell devices.
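    As a toy illustration of the multi-objective side of step (e), the sketch below extracts the Pareto (non-dominated) set from a sample of designs scored on two objectives, maximizing efficiency while minimizing cost; the design scores and cost model are hypothetical and unrelated to the cell simulations.

```python
import numpy as np

def pareto_mask(efficiency, cost):
    """True for designs not dominated by any other design
    (dominated: another design has efficiency >= and cost <=,
    with at least one inequality strict)."""
    n = len(efficiency)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        weakly = (efficiency >= efficiency[i]) & (cost <= cost[i])
        strictly = (efficiency > efficiency[i]) | (cost < cost[i])
        if np.any(weakly & strictly):
            mask[i] = False
    return mask

rng = np.random.default_rng(3)
eff = rng.uniform(0.10, 0.30, 200)                 # stand-in quantum efficiency
cost = 1.0 + 5.0 * eff + rng.normal(0, 0.2, 200)   # stand-in cost model
front = pareto_mask(eff, cost)
print(f"{front.sum()} non-dominated designs kept for a posteriori selection")
```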

    External Control of the GAL Network in S. cerevisiae: A View from Control Theory

    While there is a vast literature on the control systems that cells use to regulate their own state, there is little published work on the formal application of control theory to the external regulation of cellular functions. This paper chooses the GAL network in S. cerevisiae as a well-understood benchmark example to demonstrate how control theory can be employed to regulate intracellular mRNA levels via extracellular galactose. Based on a mathematical model reduced from the GAL network, we demonstrate that the galactose dose necessary to drive and maintain the desired GAL gene mRNA levels can be calculated in analytic form, and thus a proportional feedback controller can be designed to precisely regulate the mRNA level. The benefits of the proposed feedback control are extensively investigated in terms of stability and parameter sensitivity. This paper demonstrates that feedback control can both significantly accelerate precise regulation of mRNA levels and enhance the robustness of the overall cellular control system.
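    A minimal sketch of the idea, assuming a generic first-order mRNA model rather than the paper's reduced GAL network: mRNA obeys dm/dt = k*u - gamma*m, the galactose dose u is set by a proportional controller, and all constants are illustrative.

```python
k, gamma, Kp, m_ref = 1.0, 0.5, 4.0, 2.0   # illustrative constants
dt, m, trace = 0.01, 0.0, []
for _ in range(2000):
    u = max(0.0, Kp * (m_ref - m))   # galactose dose cannot be negative
    m += dt * (k * u - gamma * m)    # forward-Euler step of dm/dt = k*u - gamma*m
    trace.append(m)
print(f"final mRNA level: {trace[-1]:.3f} (reference: {m_ref})")
# Proportional-only control settles at m* = k*Kp*m_ref / (gamma + k*Kp) ~ 1.78,
# short of m_ref = 2.0; this steady-state offset is one motivation for the
# analytically computed galactose dose described in the abstract above.
```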

    Performance of CMS muon reconstruction in pp collision events at sqrt(s) = 7 TeV

    The performance of muon reconstruction, identification, and triggering in CMS has been studied using 40 inverse picobarns of data collected in pp collisions at sqrt(s) = 7 TeV at the LHC in 2010. A few benchmark sets of selection criteria covering a wide range of physics analysis needs have been examined. For all considered selections, the efficiency to reconstruct and identify a muon with a transverse momentum pT larger than a few GeV is above 95% over the whole region of pseudorapidity covered by the CMS muon system, abs(eta) < 2.4, while the probability to misidentify a hadron as a muon is well below 1%. The efficiency to trigger on single muons with pT above a few GeV is higher than 90% over the full eta range, and typically substantially better. The overall momentum scale is measured to a precision of 0.2% with muons from Z decays. The transverse momentum resolution varies from 1% to 6% depending on pseudorapidity for muons with pT below 100 GeV and, using cosmic rays, it is shown to be better than 10% in the central region up to pT = 1 TeV. Observed distributions of all quantities are well reproduced by the Monte Carlo simulation.